In graph theory, the '''Shannon capacity''' of a graph is a graph invariant defined from the independence numbers of its strong graph powers. It measures the Shannon capacity of a communications channel defined from the graph, and is upper bounded by the Lovász number, which can be computed in polynomial time. However, the computational complexity of the Shannon capacity itself remains unknown.

==Graph models of communication channels==

The Shannon capacity models the amount of information that can be transmitted across a noisy communication channel in which certain signal values can be confused with each other. In this application, the ''confusion graph'' or ''confusability graph'' describes the pairs of values that can be confused. For instance, suppose that a communications channel has five discrete signal values, any one of which can be transmitted in a single time step. These values may be modeled mathematically as the five numbers 0, 1, 2, 3, and 4 in modular arithmetic modulo 5. However, suppose that when a value ''i'' is sent across the channel, the value that is received is ''i'' + ''ξ'' (mod 5), where ''ξ'' represents the noise on the channel and may be any real number in the open interval −1 < ''ξ'' < 1. Thus, if the recipient receives a value such as 3.6, it is impossible to determine whether it was originally transmitted as a 3 or as a 4: the two values 3 and 4 can be confused with each other.

This situation can be modeled by a graph, the cycle ''C''<sub>5</sub> of length 5, in which the vertices correspond to the five values that can be transmitted and the edges represent pairs of values that can be confused with each other. For this example, it is possible to choose two values that can be transmitted in each time step without ambiguity, for instance the values 1 and 3.
These values are far enough apart that they cannot be confused with each other: when the recipient receives a value ''x'' in the range 0 < ''x'' < 2, it can deduce that the value that was sent must have been 1, and when the recipient receives a value ''x'' in the range 2 < ''x'' < 4, it can deduce that the value that was sent must have been 3. In this way, in ''n'' steps of communication, the sender can communicate up to 2<sup>''n''</sup> different messages. Two is the maximum number of values that the recipient can distinguish from each other: every subset of three or more of the values 0, 1, 2, 3, 4 includes at least one pair that can be confused with each other. Even though the channel has five values that can be sent per time step, effectively only two of them can be used with this coding scheme.

However, more complicated coding schemes allow a greater amount of information to be sent across the same channel, by using codewords of length greater than one. For instance, suppose that in two consecutive steps the sender transmits one of the five code words "11", "23", "35", "54", or "42". (Here, the quotation marks indicate that these words should be interpreted as strings of symbols, not as decimal numbers.) Each pair of these code words includes at least one position where its values differ by two or more modulo 5; for instance, "11" and "23" differ by two in their second position, while "23" and "42" differ by two in their first position. Therefore, a recipient of one of these code words will always be able to determine unambiguously which one was sent: no two of these code words can be confused with each other. By using this method, in ''n'' steps of communication, the sender can communicate up to 5<sup>''n''/2</sup> messages, significantly more than the 2<sup>''n''</sup> that could be transmitted with the simpler one-digit code. The effective number of values that can be transmitted per unit time step is (5<sup>''n''/2</sup>)<sup>1/''n''</sup> = √5.
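The distinguishability argument above can be checked mechanically. The following Python sketch encodes the mod-5 confusability rule and the five code words from the example (reading the digit 5 as the channel value 0, since the symbols are taken modulo 5); the function names are illustrative, not from any standard library:

```python
from itertools import combinations

def confusable(a, b):
    # Two channel values can be confused when they differ by at most 1
    # mod 5 (the channel adds noise xi with -1 < xi < 1).
    d = (a - b) % 5
    return d <= 1 or d >= 4

# The five two-symbol code words from the text; digits are read mod 5,
# so the symbol "5" denotes the same channel value as 0.
codewords = ["11", "23", "35", "54", "42"]

def distinguishable(u, v):
    # Two code words are unambiguous if, in at least one position,
    # their values differ by two or more modulo 5.
    return any(not confusable(int(a) % 5, int(b) % 5) for a, b in zip(u, v))

# Every pair of code words is distinguishable, so a recipient can
# always recover which word was sent.
assert all(distinguishable(u, v) for u, v in combinations(codewords, 2))
print("all", len(codewords), "code words are pairwise distinguishable")
```

For example, "11" and "54" are confusable in their first position (1 and 5 ≡ 0 differ by one) but differ by more than one modulo 5 in their second, which is enough to tell them apart.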
In graph-theoretic terms, this means that the Shannon capacity of the 5-cycle is at least √5. As László Lovász showed in 1979, this bound is tight: it is not possible to find a more complicated system of code words that allows even more different messages to be sent in the same amount of time, so the Shannon capacity of the 5-cycle is exactly √5.
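The numbers behind this example can also be confirmed by brute force. The sketch below (a self-contained check, not how the Lovász upper bound is actually computed) verifies that the independence number of ''C''<sub>5</sub> is 2, and that the strong product ''C''<sub>5</sub> ⊠ ''C''<sub>5</sub> has an independent set of size 5 (the five code words, digits taken mod 5) but none of size 6, giving the lower bound 5<sup>1/2</sup> = √5 on the capacity:

```python
from itertools import combinations

def adjacent_c5(a, b):
    # Edges of the 5-cycle C5: values differing by 1 mod 5.
    return (a - b) % 5 in (1, 4)

def independent(subset, adjacent):
    # An independent set contains no adjacent pair.
    return not any(adjacent(u, v) for u, v in combinations(subset, 2))

# alpha(C5) = 2: some pair of values is independent, but no triple is.
assert any(independent(s, adjacent_c5) for s in combinations(range(5), 2))
assert not any(independent(s, adjacent_c5) for s in combinations(range(5), 3))

def adjacent_prod(u, v):
    # In the strong product C5 x C5, two distinct pairs are adjacent
    # when each coordinate is equal or adjacent in C5.
    return u != v and all(a == b or adjacent_c5(a, b) for a, b in zip(u, v))

verts = [(i, j) for i in range(5) for j in range(5)]

# The five code words "11", "23", "35", "54", "42" (digits mod 5, 5 -> 0)
# form an independent set of size 5 in the strong product...
code = [(1, 1), (2, 3), (3, 0), (0, 4), (4, 2)]
assert independent(code, adjacent_prod)

# ...and no independent set of size 6 exists, so alpha(C5 x C5) = 5
# and the capacity is at least 5 ** 0.5.
assert not any(independent(s, adjacent_prod) for s in combinations(verts, 6))
print("alpha(C5) = 2, alpha(C5 x C5) = 5, capacity >=", 5 ** 0.5)
```

The size-6 check enumerates all 177,100 six-vertex subsets of the 25-vertex product graph; this is feasible here only because the example is tiny, which is consistent with the unknown complexity of the Shannon capacity in general.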